A generalized convergence theorem for neural networks

Authors
Abstract


Similar Articles

A generalized convergence theorem for neural networks

New sampling theorems are developed for isotropic random fields and their associated Fourier coefficient processes. A wavenumber-limited isotropic random field z(·) is considered whose spectral density function is zero outside a disk of radius B centered at the origin of the ...
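For context, the standard rectangular-lattice Nyquist result for a field that is wavenumber-limited to the disk of radius B (the baseline such isotropic sampling theorems refine, not the paper's own scheme) reads:

z(x_1, x_2) = \sum_{n \in \mathbb{Z}} \sum_{m \in \mathbb{Z}} z(n\Delta, m\Delta)\,
  \operatorname{sinc}\!\left(\frac{x_1 - n\Delta}{\Delta}\right)
  \operatorname{sinc}\!\left(\frac{x_2 - m\Delta}{\Delta}\right),
\qquad \Delta \le \frac{\pi}{B}, \quad \operatorname{sinc}(t) = \frac{\sin \pi t}{\pi t},

which holds in the mean-square sense because the disk of radius B is contained in the square [-B, B]^2.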


A computational model and convergence theorem for rumor dissemination in social networks

The spread of rumors, which are unverified statements of uncertain origin, can threaten society, and controlling it is important for the national security councils of countries. If it were possible to identify the factors that affect the spread of a rumor (such as agents' desires, the trust network, etc.), they could be used to slow it down or stop it. Therefore, a computational m...
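To make the notion of a computational rumor-spreading model concrete, here is a minimal Maki–Thompson-style simulation on a random graph. The graph, probabilities, and agent states are illustrative assumptions, not the model proposed in this paper.

# A minimal Maki-Thompson-style rumor simulation on a random graph.
# Generic illustration only; the spreading/stifling probabilities and
# the Erdos-Renyi graph are placeholder assumptions.
import random

def make_random_graph(n, p, rng):
    """Erdos-Renyi graph as an adjacency list."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def simulate(adj, spread_p=0.3, stifle_p=0.1, steps=50, rng=None):
    rng = rng or random.Random(0)
    state = {v: "ignorant" for v in adj}       # ignorant / spreader / stifler
    state[rng.choice(list(adj))] = "spreader"  # seed the rumor at one agent
    history = []
    for _ in range(steps):
        for v in [u for u, s in state.items() if s == "spreader"]:
            for w in adj[v]:
                if state[w] == "ignorant" and rng.random() < spread_p:
                    state[w] = "spreader"      # rumor is passed on
                elif state[w] != "ignorant" and rng.random() < stifle_p:
                    state[v] = "stifler"       # spreader loses interest
                    break
        history.append(sum(s == "spreader" for s in state.values()))
    return history

rng = random.Random(42)
graph = make_random_graph(200, 0.05, rng)
print(simulate(graph, rng=rng))   # number of active spreaders per step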


Learning in neural networks based on a generalized fluctuation theorem.

Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universa...
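For background, the information-maximization (infomax) principle mentioned here is conventionally formalized as tuning the network parameters W to maximize the mutual information between input X and output Y; this is the generic textbook form, not necessarily the formulation used in the paper:

\max_{W} \; I(X; Y) = H(Y) - H(Y \mid X).

In the classical setting with additive output noise independent of W, this reduces to maximizing the output entropy H(Y).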


GENERALIZED PRINCIPAL IDEAL THEOREM FOR MODULES

The Generalized Principal Ideal Theorem is one of the cornerstones of dimension theory for Noetherian rings. For an R-module M, we identify certain submodules of M that play a role analogous to that of prime ideals in the ring R. Using this definition, we extend the Generalized Principal Ideal Theorem to modules.
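For reference, the ring-theoretic statement being extended (Krull's generalized principal ideal theorem) says that a prime minimal over an n-generated ideal in a Noetherian ring has height at most n:

\text{If } R \text{ is Noetherian, } I = (a_1, \dots, a_n) \subseteq R, \text{ and } \mathfrak{p} \text{ is a prime minimal over } I, \text{ then } \operatorname{ht}(\mathfrak{p}) \le n.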


A representer theorem for deep neural networks

We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network c...
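As a sketch of the flavor of the result (the symbols here are generic placeholders, not the paper's exact notation): training with a second-order total-variation penalty on each neuron's activation σ,

\min \; \sum_{m=1}^{M} E\bigl(y_m, f(x_m)\bigr) + \lambda \sum_{\text{neurons}} \mathrm{TV}^{(2)}(\sigma),
\qquad \mathrm{TV}^{(2)}(\sigma) = \bigl\| \mathrm{D}^{2} \sigma \bigr\|_{\mathcal{M}},

admits optimal activations that are nonuniform linear splines with finitely many knots,

\sigma(x) = b_0 + b_1 x + \sum_{k=1}^{K} a_k \,(x - \tau_k)_{+},

which is the spline-and-sparsity connection the abstract refers to.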



Journal

Journal title: IEEE Transactions on Information Theory

Year: 1988

ISSN: 0018-9448, 1557-9654

DOI: 10.1109/18.21239